Information Bottleneck in Control Tasks with Recurrent Spiking Neural Networks
Abstract
The nervous system encodes continuous information from the environment in the form of discrete spikes, and then decodes these to produce smooth motor actions. Understanding how spikes integrate, represent, and process information to produce behavior is one of the greatest challenges in neuroscience. Information theory has the potential to help us address this challenge. Information-theoretic analyses of deep, feed-forward artificial neural networks solving static input-output tasks have led to the proposal of the Information Bottleneck principle, which states that deeper layers encode more relevant yet minimal information about the inputs. Such an analysis of networks that are recurrent, spiking, and perform control tasks remains relatively unexplored. Here, we present results from a mutual information analysis of a recurrent spiking neural network that was evolved to perform the classic pole-balancing task. Our results show that these networks deviate from the Information Bottleneck principle prescribed for feed-forward networks.
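To make the quantities involved concrete, below is a minimal sketch of the kind of histogram-based (plug-in) mutual information estimate such an analysis could rely on, computed between a neuron's binned spike counts and a task variable such as the pole angle. The function name, binning scheme, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): plug-in estimate of the mutual
# information I(S; X) between a neuron's binned spike counts S and a
# discretized task variable X (e.g. pole angle). Binning and data are
# illustrative assumptions.
import numpy as np

def mutual_information(spike_counts, task_variable, n_bins=10):
    """Histogram-based MI estimate (in bits) between two 1-D signals."""
    # Discretize both signals into equal-width bins and form the joint histogram.
    joint, _, _ = np.histogram2d(spike_counts, task_variable, bins=n_bins)
    p_joint = joint / joint.sum()                # joint distribution P(s, x)
    p_s = p_joint.sum(axis=1, keepdims=True)     # marginal P(s)
    p_x = p_joint.sum(axis=0, keepdims=True)     # marginal P(x)
    nonzero = p_joint > 0                        # avoid log(0)
    return np.sum(p_joint[nonzero] *
                  np.log2(p_joint[nonzero] / (p_s @ p_x)[nonzero]))

# Example: spike counts from one recurrent neuron vs. the pole angle,
# recorded over the same simulation time steps (synthetic data here).
rng = np.random.default_rng(0)
angle = rng.uniform(-0.2, 0.2, size=5000)        # pole angle in radians
counts = rng.poisson(5 + 20 * np.abs(angle))     # angle-tuned spike counts
print(f"I(spikes; angle) ~ {mutual_information(counts, angle):.3f} bits")
```

A plug-in estimate like this is biased for small samples and sensitive to the number of bins; any quantitative comparison across layers or time would need a consistent binning choice or a bias-corrected estimator.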
Similar Sources
Learning Universal Computations with Spikes
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world ...
Biological and Functional Models of Learning in Networks of Spiking Neurons
Neural circuits generally process information in a massively parallel way and communicate between their constituent units via spikes, i.e. binary events, and therefore differ fundamentally from many artificial information-processing and learning systems. In such neural circuits, synaptic plasticity is widely considered to be the main biophysical correlate of learning. This thesis ...
Layer-wise Learning of Stochastic Neural Networks with Information Bottleneck
In this paper, we present layer-wise learning of stochastic neural networks (SNNs) from an information-theoretic perspective. In each layer of an SNN, the compression and the relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all parameters in an SN...
Gradient Descent for Spiking Neural Networks
Many studies of neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of efficient supervised learning algorithms for spiking networks. Here,...
Biologically inspired neural networks for the control of embodied agents
This paper reviews models of neural networks suitable for the control of artificial intelligent agents interacting continuously with an environment. We first define the characteristics needed by those neural networks. We then review several classes of neural models and compare them with respect to their suitability for embodied agent control. Among the classes of neural network models amenable to larg...